68 research outputs found

    Automatic system for personal authentication using the retinal vessel tree as biometric pattern

    [Abstract] Reliable personal authentication is a service in growing demand in many fields, not only in police or military environments but also in civil applications such as access control to restricted areas or the management of financial transactions. Traditional authentication systems are based on knowledge (a password or a PIN) or on possession (a card or a key). Such systems are not sufficiently reliable in many environments, due to their common inability to distinguish between a truly authorized user and one who has fraudulently acquired that privilege. A solution to these problems is found in biometric authentication technologies. A biometric system is a pattern recognition system that establishes the authenticity of individuals by characterizing them through some physical or behavioral trait. Many authentication technologies exist, some of them already implemented in commercial packages. The most common biometric techniques are the fingerprint, probably the oldest trait used in biometrics, the iris, the face, hand geometry and, among behavioral traits, voice and signature recognition. Nowadays, most of the effort in biometric systems is directed at designing more secure environments where it is harder, or virtually impossible, to create a copy of the properties the system uses to discriminate between authorized and unauthorized users. In this context, the retinal blood vessel pattern emerges as a relatively young but very interesting biometric trait due to its inherent properties. The most important is that it is a pattern unique to each individual. Moreover, being an internal trait, it is almost impossible to forge.
Finally, another interesting property is that the pattern does not change significantly over time, except in cases of some serious and uncommon pathologies. For all these reasons, the retinal pattern can be considered a valid biometric trait for personal authentication, since it is unique, time-invariant and nearly impossible to imitate. On the other hand, the main drawback of using the retinal vessel pattern as a biometric trait lies in the acquisition stage, still perceived by users as invasive and uncomfortable. Nowadays there are mechanisms to obtain digital images instantly through non-invasive cameras, but these advances in turn require a greater tolerance to variations in the quality of the acquired image and, therefore, more elaborate computational methods capable of processing the information in more heterogeneous environments. This thesis presents a new automatic authentication system using the retinal vessel tree as a biometric trait. The goal is to design and develop a robust and compact biometric pattern that can be easily handled and stored in today's mobile devices such as smart cards. The biometric template built from the retinal tree consists of its characteristic points (bifurcations and crossovers between vessels), so that storing and processing the whole tree is not necessary to perform the authentication.
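The template described above reduces authentication to comparing two sets of characteristic points. A minimal sketch of such point-based matching, assuming a greedy nearest-neighbour pairing with a fixed tolerance (the function name, tolerance and coordinates are illustrative, not the thesis' actual algorithm):

```python
# Each user's template is a set of (x, y) feature points (vessel
# bifurcations and crossovers); authentication scores the overlap
# between an enrolled template and a freshly extracted probe.
from math import hypot

def match_score(template, candidate, tol=5.0):
    """Fraction of template points with a nearby candidate point."""
    unused = list(candidate)
    matched = 0
    for (x, y) in template:
        # Greedily pair each template point with the closest unused
        # candidate point, accepting it only within the tolerance radius.
        best = min(unused, key=lambda p: hypot(p[0] - x, p[1] - y), default=None)
        if best is not None and hypot(best[0] - x, best[1] - y) <= tol:
            matched += 1
            unused.remove(best)
    return matched / max(len(template), 1)

enrolled = [(10, 10), (40, 25), (70, 60)]
probe    = [(11, 9), (41, 26), (120, 5)]   # two points match, one does not
print(match_score(enrolled, probe))        # 2 of 3 template points matched
```

A real system would additionally align the two point sets (the eye moves between acquisitions) before scoring; this sketch assumes pre-registered coordinates.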

    Data Extraction in Insurance Photo-Inspections Using Computer Vision

    [Abstract] Recent advances in computer vision and artificial intelligence allow for better processing of complex information in many fields of human activity. One such field is vehicle expertise and inspection. This paper presents the development of systems for the automatic reading of French and Spanish license plates, as well as odometer value reading in dashboard photographs. These were trained and validated with real examples from more than 4000 vehicles, while addressing typical problems with irregular data acquisition. The proposed systems have found use in a real environment and are employed as assistance in vehicle appraisal. This research was co-funded by Ministerio de Ciencia, Innovación y Universidades and the European Union (European Regional Development Fund, ERDF) as part of the RTC-2017-5908-7 research project. We wish to acknowledge the support received from the Centro de Investigación de Galicia CITIC, funded by Consellería de Educación, Universidade e Formación Profesional from Xunta de Galicia and the European Union (European Regional Development Fund, FEDER Galicia 2014-2020 Program) through grant ED431G 2019/01.
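A plate reader of this kind needs, at minimum, to validate the recognized text against the national plate formats. A sketch of that step, assuming the current standard Spanish format (4 digits + 3 consonants) and the French SIV layout (2 letters - 3 digits - 2 letters); the helper names are ours, not the paper's:

```python
import re

PLATE_PATTERNS = {
    # Spanish format since 2000: 4 digits + 3 consonants (no vowels, no Ñ or Q)
    "ES": re.compile(r"^\d{4}[BCDFGHJKLMNPRSTVWXYZ]{3}$"),
    # French SIV format since 2009: 2 letters - 3 digits - 2 letters
    "FR": re.compile(r"^[A-Z]{2}-\d{3}-[A-Z]{2}$"),
}

def validate_plate(text, country):
    """Normalize an OCR result and check it against the country format."""
    normalized = text.strip().upper()
    if country == "ES":
        # Spanish plates are written with or without a separator.
        normalized = normalized.replace(" ", "").replace("-", "")
    return bool(PLATE_PATTERNS[country].match(normalized)), normalized

print(validate_plate("1234 bcd", "ES"))   # (True, '1234BCD')
print(validate_plate("AB-123-CD", "FR"))  # (True, 'AB-123-CD')
```

Such a check is also a cheap way to reject OCR confusions (e.g. a digit read where a letter must appear) before any downstream use of the plate.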

    On the quantitative estimation of short-term aging in human faces

    Facial aging has been only partially studied in the past, and mostly in a qualitative way. This paper presents a novel approach to the estimation of facial aging aimed at the quantitative evaluation of the changes in facial appearance over time. In particular, the changes in both face shape and texture due to short-term aging are considered. The developed framework exploits the concept of “distinctiveness” of facial features and the temporal evolution of that measure. The analysis is performed both at a global and a local level to identify the features that are more stable over time. Several experiments are performed on publicly available databases with image sequences densely sampled over a time span of several years. The reported results clearly show the potential of the methodology for a number of applications in biometric identification from human faces.

    Fully Automatic Deep Convolutional Approaches for the Analysis of COVID-19 Using Chest X-Ray Images

    Funding for open access publication: Universidade da Coruña/CISUG. [Abstract] Covid-19 is a new infectious disease caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Given the seriousness of the situation, the World Health Organization declared a global pandemic as Covid-19 spread rapidly around the world. Chest X-ray images are frequently used for early diagnosis/screening of Covid-19, given the frequent pulmonary impact in patients, a critical issue in preventing further complications caused by this highly infectious disease. In this work, we propose 4 fully automatic approaches for the classification of chest X-ray images into 3 categories: Covid-19, pneumonia and healthy cases. Given the similarity of the pathological impact on the lungs of Covid-19 and pneumonia, mainly during the initial stages of both lung diseases, we performed an exhaustive study of their differentiation considering different pathological scenarios. To address these classification tasks, we evaluated 6 representative state-of-the-art deep network architectures on 3 different public datasets: (I) the chest X-ray dataset of the Radiological Society of North America (RSNA); (II) the Covid-19 Image Data Collection; (III) the SIRM dataset of the Italian Society of Medical Radiology. To validate the designed approaches, several representative experiments were performed using 6,070 chest X-ray radiographs.
In general, satisfactory results were obtained from the designed approaches, reaching global accuracy values of 0.9706 ± 0.0044, 0.9839 ± 0.0102, 0.9744 ± 0.0104 and 0.9744 ± 0.0104, respectively, thus helping the work of clinicians in the diagnosis and consequently in the early treatment of this relevant pandemic pathology. This research was funded by Instituto de Salud Carlos III, Government of Spain, DTS18/00136 research project; Ministerio de Ciencia, Innovación y Universidades, Government of Spain, RTI2018-095894-B-I00 research project; Ministerio de Ciencia e Innovación, Government of Spain, through the research project with reference PID2019-108435RB-I00; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through the postdoctoral grant contract ref. ED481B-2021-059, and Grupos de Referencia Competitiva, grant ref. ED431C 2020/24; and Axencia Galega de Innovación (GAIN), Xunta de Galicia, grant ref. IN845D 2020/38. CITIC, as a Research Center accredited by the Galician University System, is funded by Consellería de Cultura, Educación e Universidade from Xunta de Galicia, supported 80% through ERDF Funds (ERDF Operational Programme Galicia 2014-2020) and the remaining 20% by Secretaría Xeral de Universidades (Grant ED431G 2019/01). Funding for open access charge: Universidade da Coruña/CISUG.
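The figures above are global accuracies over the 3 categories, with the ± term suggesting aggregation over repeated experiments. A minimal sketch of how the basic metric is computed from a model's predictions (the labels below are made up for illustration; class index 0 = Covid-19, 1 = pneumonia, 2 = healthy):

```python
def confusion_matrix(y_true, y_pred, n_classes=3):
    """Count how often each true class was predicted as each class."""
    m = [[0] * n_classes for _ in range(n_classes)]
    for t, p in zip(y_true, y_pred):
        m[t][p] += 1
    return m

def global_accuracy(y_true, y_pred):
    """Fraction of samples assigned to their true class."""
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    return correct / len(y_true)

y_true = [0, 0, 1, 1, 2, 2, 2]
y_pred = [0, 1, 1, 1, 2, 2, 0]
print(global_accuracy(y_true, y_pred))   # 5 of 7 radiographs correct
```

The confusion matrix is the natural companion here: with Covid-19 and pneumonia being visually similar, the off-diagonal cell between those two classes is where the differentiation study above would concentrate.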

    COVID-19 Lung Radiography Segmentation by Means of Multiphase Transfer Learning

    Presented at the 4th XoveTIC Conference, A Coruña, Spain, 7–8 October 2021. [Abstract] COVID-19 is characterized by its impact on the respiratory system and, during the global outbreak of 2020, specific protocols had to be designed to contain its spread within hospitals. This required the use of portable X-ray devices, which allow for greater flexibility in terms of their arrangement in rooms not specifically designed for that purpose. However, their poor image quality, together with the subjectivity of the expert, can hinder the diagnosis process. Therefore, the use of automatic methodologies is advised. Even so, their development is challenging due to the scarcity of available samples. For this reason, we present a COVID-19-specific methodology able to segment these portable chest radiographs with a reduced number of samples via multiple transfer learning phases. This allows us to extract knowledge from two related fields and obtain a robust methodology with limited data from the target domain. Our proposal aims to help both experts and other computer-aided diagnosis systems to focus their attention on the region of interest, ignoring unrelated information. Instituto de Salud Carlos III, Government of Spain, DTS18/00136 research project; Ministerio de Ciencia, Innovación y Universidades, Government of Spain, RTI2018-095894-B-I00 research project, and Ayudas para la formación de profesorado universitario (FPU), grant ref. FPU18/02271; Ministerio de Ciencia e Innovación, Government of Spain, through the research project with reference PID2019-108435RB-I00; Consellería de Cultura, Educación e Universidade, Xunta de Galicia, Grupos de Referencia Competitiva, grant ref. ED431C 2020/24, and through the postdoctoral grant contract ref. ED481B 2021/059; Axencia Galega de Innovación (GAIN), Xunta de Galicia, grant ref. IN845D 2020/38. CITIC, Centro de Investigación de Galicia ref. ED431G 2019/01, receives financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%).
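The multiphase idea can be illustrated in miniature: parameters fitted on a larger related domain initialize training on a closer domain, so little target data is needed. A toy sketch with a one-parameter least-squares model standing in for the deep segmentation networks (all values are synthetic; this is our simplification, not the paper's training procedure):

```python
def fit(xs, ys, w0=0.0, lr=0.01, steps=500):
    """Gradient descent for the model y ~ w * x, starting from w0."""
    w = w0
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
        w -= lr * grad
    return w

# Phase 1: plenty of data from a related source domain (true slope ~2.0).
w1 = fit(xs=[1, 2, 3, 4], ys=[2.1, 3.9, 6.2, 7.8])

# Phase 2: only two target-domain samples; starting from the transferred
# w1 instead of from scratch means far fewer steps are needed.
w2 = fit(xs=[1, 2], ys=[2.4, 4.6], w0=w1, steps=100)

print(round(w1, 2), round(w2, 2))  # → 1.99 2.32
```

The same chaining generalizes to any number of phases, each warm-starting from the previous one, which is the structure the methodology above exploits across its related imaging domains.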

    Self-Supervised Multimodal Reconstruction Pre-training for Retinal Computer-Aided Diagnosis

    Funding for open access publication: Universidade da Coruña/CISUG. [Abstract] Computer-aided diagnosis using retinal fundus images is crucial for the early detection of many ocular and systemic diseases. Nowadays, deep learning-based approaches are commonly used for this purpose. However, training deep neural networks usually requires a large amount of annotated data, which is not always available. In practice, this issue is commonly mitigated with different techniques, such as data augmentation or transfer learning. Nevertheless, the latter is typically faced using networks that were pre-trained on additional annotated data. An emerging alternative to the traditional transfer learning source tasks is the use of self-supervised tasks that do not require manually annotated data for training. In that regard, we propose a novel self-supervised visual learning strategy for improving retinal computer-aided diagnosis systems using unlabeled multimodal data. In particular, we explore the use of a multimodal reconstruction task between complementary retinal imaging modalities. This makes it possible to take advantage of existing unlabeled multimodal data in the medical domain, improving the diagnosis of different ocular diseases with additional domain-specific knowledge that does not rely on manual annotation. To validate and analyze the proposed approach, we performed several experiments aimed at the diagnosis of different diseases, including two of the most prevalent impairing ocular disorders: glaucoma and age-related macular degeneration. Additionally, the advantages of the proposed approach are clearly demonstrated in the comparisons we perform against both the common fully-supervised approaches in the literature and current self-supervised alternatives for retinal computer-aided diagnosis.
In general, the results show a satisfactory performance of our proposal, which improves existing alternatives by leveraging the unlabeled multimodal visual data that is commonly available in the medical field. This work is supported by Instituto de Salud Carlos III, Government of Spain, and the European Regional Development Fund (ERDF) of the European Union (EU) through the DTS18/00136 research project; Ministerio de Ciencia e Innovación, Government of Spain, through the RTI2018-095894-B-I00 and PID2019-108435RB-I00 research projects; Xunta de Galicia and the European Social Fund (ESF) of the EU through the predoctoral grant contract ref. ED481A-2017/328; and Consellería de Cultura, Educación e Universidade, Xunta de Galicia, through Grupos de Referencia Competitiva, grant ref. ED431C 2020/24. CITIC, Centro de Investigación de Galicia ref. ED431G 2019/01, receives financial support from Consellería de Educación, Universidade e Formación Profesional, Xunta de Galicia, through the ERDF (80%) and Secretaría Xeral de Universidades (20%).
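The pretext-then-downstream workflow described above can be sketched with linear maps standing in for the deep networks: a modality-to-modality mapping is learned without any labels, and the learned representation then becomes available for diagnosis. All data below is synthetic, and the linear setup is our simplification, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# Unlabeled paired data: retinography features X and the corresponding
# fluorescein angiography features Y (synthetic stand-ins).
X = rng.normal(size=(200, 8))
true_map = rng.normal(size=(8, 8))
Y = X @ true_map

# Pretext task: multimodal reconstruction. No manual annotations are
# needed; the supervisory signal is the other imaging modality itself.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Downstream task: the "encoder" output X @ W is the representation a
# diagnosis model would be fine-tuned on; here we only verify that the
# pretext task was actually learned.
reconstruction_error = float(np.abs(X @ W - Y).mean())
print(reconstruction_error < 1e-8)
```

The point of the exercise is the supervision source: the mapping W is trained entirely from paired images, so every unlabeled multimodal pair contributes domain-specific knowledge before any annotated diagnosis data is touched.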

    Paired and Unpaired Deep Generative Models on Multimodal Retinal Image Reconstruction

    [Abstract] This work explores the use of paired and unpaired data for training deep neural networks in the multimodal reconstruction of retinal images. In particular, we focus on the reconstruction of fluorescein angiography from retinography, which are two complementary representations of the eye fundus. The experiments performed allow a comparison of the paired and unpaired alternatives. Instituto de Salud Carlos III, DTS18/00136; Ministerio de Ciencia, Innovación y Universidades, DPI2015-69948-R and RTI2018-095894-B-I00; Xunta de Galicia, ED431G/01, ED431C 2016-047 and ED481A-2017/328.

    Automatic Identification of Diabetic Macular Edema Using a Transfer Learning-Based Approach

    [Abstract] This paper presents a complete system for the automatic identification of pathological Diabetic Macular Edema (DME) cases using Optical Coherence Tomography (OCT) images as the source of information. To do so, the system extracts a set of deep features using a transfer learning-based approach from different fully-connected layers and different pre-trained Convolutional Neural Network (CNN) models. Next, the most relevant subset of deep features is identified using representative feature selection methods. Finally, a machine learning strategy is applied to train and test the potential of the identified deep features in the pathological classification process. Satisfactory results were obtained, demonstrating the suitability of the presented system to filter pathological DME cases, helping specialists to optimize their diagnostic procedures. This work is supported by the Instituto de Salud Carlos III, Government of Spain, and FEDER funds through the DTS18/00136 research project, and by Ministerio de Ciencia, Innovación y Universidades, Government of Spain, through the DPI2015-69948-R and RTI2018-095894-B-I00 research projects. Also, this work has received financial support from the European Union (European Regional Development Fund, ERDF) and the Xunta de Galicia, Centro singular de investigación de Galicia accreditation 2016-2019, Ref. ED431G/01, and Grupos de Referencia Competitiva, Ref. ED431C 2016-047.
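The three-stage pipeline above (deep feature extraction, feature selection, conventional classifier) can be sketched end to end. Here random vectors stand in for CNN-layer features, and a variance-based selector plus a nearest-centroid classifier stand in for the representative methods evaluated in the paper:

```python
import numpy as np

def select_top_variance(features, k):
    """Indices of the k features with the highest variance."""
    order = np.argsort(features.var(axis=0))[::-1]
    return np.sort(order[:k])

def nearest_centroid_predict(train_x, train_y, test_x):
    """Assign each sample to the class with the closest mean vector."""
    centroids = np.stack([train_x[train_y == c].mean(axis=0) for c in (0, 1)])
    d = np.linalg.norm(test_x[:, None, :] - centroids[None], axis=2)
    return d.argmin(axis=1)

rng = np.random.default_rng(1)
healthy = rng.normal(0.0, 1.0, size=(30, 16))
dme = rng.normal(0.0, 1.0, size=(30, 16))
dme[:, :4] += 3.0                      # only 4 features are informative
x = np.vstack([healthy, dme])
y = np.repeat([0, 1], 30)

keep = select_top_variance(x, k=4)     # the informative features vary most
pred = nearest_centroid_predict(x[:, keep], y, x[:, keep])
print((pred == y).mean())
```

Discarding the uninformative dimensions before classification is the point of the selection stage: the classifier then operates on a compact subset of the deep features rather than the full layer activations.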

    Multivendor fully automatic uncertainty management approaches for the intuitive representation of DME fluid accumulations in OCT images

    [Abstract] Diabetes represents one of the main causes of blindness in developed countries, through the fluid accumulations it produces in the retinal layers. The clinical literature defines the types of diabetic macular edema (DME) as cystoid macular edema (CME), diffuse retinal thickening (DRT), and serous retinal detachment (SRD), each with its own clinical relevance. These fluid accumulations do not present defined borders that would facilitate segmentation approaches (especially the DRT type, usually not taken into account by the state of the art for this reason), so a diffuse paradigm is used for their detection and visualization. In this paper, we propose three novel approaches for the representation and characterization of these types of DME: a baseline proposal, using a convolutional neural network as backbone; another based on transfer learning from a general domain; and a third approach exploiting information from regions without a defined label. Overall, our baseline proposal obtained an AUC of 0.9583 ± 0.0093, the approach pre-trained with a general-domain dataset an AUC of 0.9603 ± 0.0087, and the approach pre-trained in the domain taking advantage of uncertainty an AUC of 0.9619 ± 0.0073. Ministerio de Ciencia e Innovación, RTI2018-095894-B-I00, PID2019-108435RB-I00 and FPU18/02271; Instituto de Salud Carlos III, DTS18/00136; Xunta de Galicia, ED431C 2020/24, ED481B 2021/059 and ED431G 2019/01; Axencia Galega de Innovación, IN845D 2020/38.

    Intraretinal Fluid Detection by Means of a Densely Connected Convolutional Neural Network Using Optical Coherence Tomography Images

    [Abstract] We present a methodology for detecting retinal fluid accumulations between the retinal layers. The methodology uses a robust Densely Connected Neural Network to classify thousands of subsamples extracted from a given Optical Coherence Tomography image. Subsequently, using the detected regions, it satisfactorily generates a coherent and intuitive confidence map by means of a voting strategy. This research was funded by Instituto de Salud Carlos III, grant number DTS18/00136; Ministerio de Economía y Competitividad, grant number DPI2015-69948-R; Xunta de Galicia, through the accreditation of Centro Singular de Investigación 2016–2019, Ref. ED431G/01, and through Grupos de Referencia Competitiva, Ref. ED431C 2016-047; and a Xunta de Galicia predoctoral grant, contract ref. ED481A-2019/196.
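The voting strategy can be sketched directly: overlapping subsamples are classified independently, and each window votes for every pixel it covers, producing a per-pixel confidence map. The thresholding rule below stands in for the densely connected network; window size, stride and the toy image are illustrative:

```python
import numpy as np

def confidence_map(image, window=4, stride=2):
    """Aggregate per-window decisions into a per-pixel confidence map."""
    votes = np.zeros_like(image, dtype=float)
    counts = np.zeros_like(image, dtype=float)
    h, w = image.shape
    for i in range(0, h - window + 1, stride):
        for j in range(0, w - window + 1, stride):
            patch = image[i:i + window, j:j + window]
            fluid = float(patch.mean() > 0.5)   # stand-in classifier
            votes[i:i + window, j:j + window] += fluid
            counts[i:i + window, j:j + window] += 1
    # Normalize by how many windows covered each pixel.
    return votes / np.maximum(counts, 1)

img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0                      # a bright "fluid" region
cmap = confidence_map(img)
print(cmap[3, 3], cmap[0, 7])            # high inside, zero far away
```

Because overlapping windows vote jointly, isolated misclassifications are averaged out, which is what makes the resulting map coherent and intuitive rather than a patchwork of hard per-window decisions.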